# feat: Add OpenCode backend + OpenAI-compatible endpoints + task-specific models #20

**Draft** · rothnic wants to merge 8 commits into Pickle-Pixel:main from feat/multi-backend-support
## Conversation
Add unified `AgentBackend` abstraction supporting both Claude Code and OpenCode:

- `AgentBackend` abstract base class with a `run_job()` interface
- `ClaudeBackend` implementation for the Claude Code CLI
- `OpenCodeBackend` implementation for the OpenCode CLI, with MCP support
- Backend detection and auto-selection logic
- Backend-specific configuration in `.env.example`
- Comprehensive tests for backend selection and output parsing
- Wizard integration for backend setup and configuration
- Launcher updates to use the new backend abstraction

This enables users to choose between Claude Code (the default) and OpenCode for autonomous job application submission.
Force-pushed from 53c05f4 to 9a76346.
Remove unused imports:

- `conftest.py`: `os`, `Path`, `MagicMock`
- `test_backend_selection.py`: `os`
- `test_parser_edge_cases.py`: `time`, `PropertyMock`
- Revert `.env.example` to match `origin/main`
- Add only OpenCode-related configuration options:
  - `APPLY_BACKEND` selection
  - `APPLY_CLAUDE_MODEL`, `APPLY_OPENCODE_MODEL`, `APPLY_OPENCODE_AGENT`
  - OpenCode MCP baseline requirements
- Add `scripts/test_opencode_apply.py` for end-to-end testing:
  - Checks prerequisites (OpenCode binary, MCP servers)
  - Runs `applypilot apply --dry-run --url <job-url>`
  - Provides clear output about what worked and what failed
Add prerequisite checks for:

- `~/.applypilot/profile.json` (run `applypilot init` to create it)
- `~/.applypilot/resume.txt` or `resume.pdf`

These are required for `applypilot apply` to work.
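A minimal sketch of such a check, assuming the `~/.applypilot` layout named above (the function name is hypothetical, not the PR's actual helper):

```python
from pathlib import Path

APP_DIR = Path.home() / ".applypilot"  # assumed app directory from this PR


def check_prerequisites() -> list[str]:
    """Return a human-readable error for each missing prerequisite file."""
    errors = []
    if not (APP_DIR / "profile.json").is_file():
        errors.append("missing profile.json (run `applypilot init` to create it)")
    if not any((APP_DIR / name).is_file() for name in ("resume.txt", "resume.pdf")):
        errors.append("missing resume.txt or resume.pdf")
    return errors
```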
Add a `--backend/-b` option to explicitly select a backend (`claude` or `opencode`). This makes it easier to test OpenCode without relying solely on environment variables.
Changes:

- Add the `--backend/-b` CLI option to the `apply` command
- Display the selected backend in the output
- Pass `backend_name` to the `apply_main` launcher function

Usage:

- `applypilot apply --backend opencode --url https://...`
- `applypilot apply -b opencode --dry-run`
Force-pushed from c166b8f to cc14e8b.
The SQL filter `apply_status != 'in_progress'` doesn't match NULL values, because any comparison with NULL evaluates to NULL rather than TRUE. Changed it to `(apply_status IS NULL OR apply_status != 'in_progress')` to properly find jobs that haven't been attempted yet.
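The NULL pitfall can be reproduced with SQLite directly (table and column names mirror the description above; the schema here is a minimal stand-in, not the project's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER, apply_status TEXT)")
# One untried job (NULL status) and one currently running job.
conn.executemany("INSERT INTO jobs VALUES (?, ?)",
                 [(1, None), (2, "in_progress")])

# The old filter silently drops NULL rows: NULL != 'in_progress' is NULL, not TRUE.
old = conn.execute(
    "SELECT id FROM jobs WHERE apply_status != 'in_progress'").fetchall()

# The fixed filter also matches jobs that were never attempted.
new = conn.execute(
    "SELECT id FROM jobs WHERE apply_status IS NULL "
    "OR apply_status != 'in_progress'").fetchall()

# old → [] (the untried job is lost); new → [(1,)]
```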
OpenCode looks for `.opencode/opencode.jsonc` in the current working directory. Updated `_list_mcp_servers()` and `run_job()` to use `config.APP_DIR` (`~/.applypilot`) as the cwd so OpenCode can find the MCP server configuration.
OpenCode installs to `~/.opencode/bin/opencode` by default, which may not be on PATH. Updated `_find_binary` to check this location when the binary is not found on PATH.
## Add OpenCode Backend Support + OpenAI-Compatible LLM Endpoints + Task-Specific Models

Status: Draft - Ready for Review
Branch: feat/multi-backend-support → main
Related: Closes #[issue number]

### Summary
This PR adds comprehensive backend flexibility to ApplyPilot.

Claude Code remains the DEFAULT backend. OpenCode is available as an opt-in alternative via `APPLY_BACKEND=opencode`.

### Changes
#### 1. Auto-Apply Backend Abstraction

New: `src/applypilot/apply/backends.py` (802 lines)

- `AgentBackend` abstract base class
- `ClaudeBackend` - Claude Code CLI integration (DEFAULT)
- `OpenCodeBackend` - OpenCode CLI integration (alternative)
- `get_backend()` factory with `APPLY_BACKEND` env var support

Refactored: `src/applypilot/apply/launcher.py`

- Claude Code is used when `APPLY_BACKEND` is unset

#### 2. OpenAI-Compatible LLM Endpoints
In `src/applypilot/llm.py`:

- `LLM_URL` environment variable support
- Provider priority: `LLM_URL` > Gemini > OpenAI

#### 3. Task-Specific Model Configuration (NEW)
In `src/applypilot/llm.py`:

- `DEFAULT_FLASH_MODEL = "gemini-2.0-flash"` (most tasks)
- `DEFAULT_PRO_MODEL = "gemini-2.5-pro"` (high-quality tasks)
- `get_client_for_task(task_name)` helper

Environment Variables:
Priority Order:

1. `TASK_MODEL` env var (e.g., `TAILORING_MODEL=gpt-4`)
2. `LLM_MODEL` env var (generic override)
3. `TASK_MODEL_DEFAULTS` (built-in per-task defaults)

OpenAI Model Mapping:
- `gemini-2.0-flash` → `gpt-5-mini` (default for most tasks)
- `gemini-2.5-pro` → `gpt-5` (for high-quality tasks)

#### 4. Test Suite
- `tests/test_backend_selection.py` (30 tests)
- `tests/test_provider_routing.py` (19 tests)

#### 5. Documentation Updates
- `README.md` - Claude default clearly stated, OpenCode as alternative
- `.env.example` - all new env vars documented
- `opencode.json` - MCP parity configuration

### Configuration Examples
#### Default (Claude Code - no changes needed)

    # No configuration needed - Claude is the default
    applypilot apply

#### Use OpenCode (alternative)
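For example (variable names taken from this PR's `.env.example`; the model value is illustrative):

```shell
# Opt in to OpenCode; Claude remains the default when APPLY_BACKEND is unset
export APPLY_BACKEND=opencode
export APPLY_OPENCODE_MODEL=gpt-4o-mini   # optional; falls back to LLM_MODEL
# then run: applypilot apply --dry-run
```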
#### Use OpenAI-Compatible Gateway
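For example (the URL is the example router endpoint mentioned later in this PR; the key is a placeholder):

```shell
# Route all LLM calls through any OpenAI-compatible gateway
export LLM_URL=https://router.example.com/v1
export LLM_API_KEY=sk-placeholder
export LLM_MODEL=gpt-4o-mini
```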
#### Task-Specific Models
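For example (values are illustrative; tasks without an override fall back to `LLM_MODEL`, then the built-in defaults):

```shell
# Override models per task
export TAILORING_MODEL=gpt-4          # high-quality model for resume writing
export SCORING_MODEL=gemini-2.0-flash # fast model for job scoring
```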
#### Personal Use Case: OpenAI-Compatible Router
The author personally uses an OpenAI-compatible AI router with fallback support, which lets multiple LLM subscriptions (GitHub Copilot, Kimi Code, ChatGPT) be used through a unified endpoint. Any OpenAI-compatible endpoint should work with these changes.
Example router capabilities:

`https://router.example.com/v1`

### Future Considerations
Agent Framework with Memory Backend:

A future enhancement could adopt an agent framework with a memory backend to better learn user preferences:
Standard OpenAI Variables:
Currently this PR uses `LLM_URL`, `LLM_API_KEY`, and `LLM_MODEL`. A future refactor could adopt the standard OpenAI SDK environment variables:

- `OPENAI_BASE_URL` instead of `LLM_URL`
- `OPENAI_API_KEY` instead of `LLM_API_KEY`

LLM-Agnostic Layer:
Future work should consider migrating to an LLM-agnostic layer with a simpler interface, so that any provider can be supported with more uniform configuration.
Note: Anthropic API-compatible endpoint support would require different environment variables and is not included in this PR.
### Testing
### Migration Guide
No migration required. Existing users continue using Claude Code by default without any changes.
To try OpenCode:
To use a gateway:
To customize models:
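The three opt-ins above might look like the following (values are illustrative; the URL is the example endpoint from this PR):

```shell
# To try OpenCode:
export APPLY_BACKEND=opencode

# To use a gateway:
export LLM_URL=https://router.example.com/v1
export LLM_API_KEY=sk-placeholder

# To customize models:
export TAILORING_MODEL=gpt-5
```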
### Environment Variables Reference

LLM Provider Selection (priority order):

1. `LLM_URL` + `LLM_API_KEY` - OpenAI-compatible gateway
2. `GEMINI_API_KEY` - Google Gemini (recommended default)
3. `OPENAI_API_KEY` - OpenAI direct

Auto-Apply Backend:

- `APPLY_BACKEND` - `claude` (default) or `opencode`
- `APPLY_CLAUDE_MODEL` - default: `haiku`
- `APPLY_OPENCODE_MODEL` - falls back to `LLM_MODEL` or `gpt-4o-mini`
- `APPLY_OPENCODE_AGENT` - passed as `--agent` to `opencode run`

Task-Specific Models:

- `SCORING_MODEL` - fast model for job scoring (default: gemini-2.0-flash / gpt-5-mini)
- `TAILORING_MODEL` - high-quality model for resume writing (default: gemini-2.5-pro / gpt-5)
- `COVER_LETTER_MODEL` - standard model for cover letters
- `JD_PARSE_MODEL` - fast model for JD extraction
- `RESUME_MATCH_MODEL` - fast model for gap analysis
- `VALIDATION_MODEL` - fast model for validation checks
- `ENRICHMENT_MODEL` - fast model for job enrichment
- `SMART_EXTRACT_MODEL` - fast model for smart extraction

### Checklist
### Architecture Decisions

- Claude Code remains the default; OpenCode is opt-in via `APPLY_BACKEND=opencode`

Ready for review when: